Searle's strawman strong AI thesis is that
(1) "appropriately programmed computers literally have
mental states, and
(2) therefore the programs are psychological theories".
He goes on to say that
(3) "strong AI must be false since a human agent could instantiate
the program and still not have the appropriate mental states".
Each of the above points involves a misconception.
(1) Appropriately programmed computers
*may be ascribed* mental states
relative to an interpretation. Freeing the ascription from an
interpretation requires that there be a unique best interpretation.
We shall see that in Searle's examples, this requirement fails.
My intuition differs further from Searle's: much simpler systems,
including even thermostats, can reasonably be ascribed beliefs.
Refusing to ascribe mental qualities to simple systems seems like
refusing to admit 0 and 1 as numbers on the grounds that numbers are
not needed to discuss the null set and that it is unnecessary to
regard a set with one member as distinct from the member. As soon as
one tries to establish general laws, one needs to admit the trivial
cases to avoid complicated statements.
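As a concrete illustration of ascription relative to an interpretation,
consider the following sketch. It is my own illustration in Python, not
anything from Searle's paper: the mapping from a thermostat's state to
one of three beliefs is the interpretation, and the ascription earns its
keep by supporting a general statement about belief-governed action.

    # A minimal sketch, assuming only the thermostat intuition above.
    # The "interpretation" is the mapping from internal state to belief.
    def thermostat_belief(temp, setpoint, band=1.0):
        # Relative to this interpretation, the device believes exactly
        # one of three propositions about the room.
        if temp < setpoint - band:
            return "the room is too cold"
        if temp > setpoint + band:
            return "the room is too hot"
        return "the room is OK"

    def act(belief):
        # The general law: act so as to falsify "too cold" or "too hot".
        return {"the room is too cold": "turn heat on",
                "the room is too hot": "turn heat off",
                "the room is OK": "do nothing"}[belief]

    print(act(thermostat_belief(15.0, 20.0)))   # prints "turn heat on"

The same law covers people, programs, and thermostats; refusing the
trivial case would only force a more complicated statement of it.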
(2) I don't know whether the "therefore" is Searle's own inference or
part of the strong AI thesis he ascribes. Alas, Newell and Simon have
made the argument
that programs are theories. In my opinion, a program isn't a psychological
theory any more than a person is a psychological theory.
Associated with a program is a trivially related theory that all
persons have external behavior like that of the program. However,
an informative theory would at least identify psychological or mental
states with certain states of the program. Better theories would
relate psychological terms to properties of a whole class of programs.
(3) In daily life it is usually possible to identify
a physical person with a unique mental process. In the case of actors
and schizophrenics, this isn't always possible, and Searle's
"counterexamples" depend on confusing minds and brains or programs
and hardware.
Suppose an English speaker mentally simulates a program that
manipulates Chinese ideographs well enough to interact as the Chinese
scholar Li Po, and suppose moreover that identifications can be made
between the data in the mind of the man and the mental states of Li Po.
We will then say that the
personality of the man (ordinarily there is but one) knows English and
knows how to simulate computer programs. He may
or may not know Chinese, and he may or may not know that the program knows
Chinese; he may think it's Japanese. The program knows Chinese and many
facts about Chinese culture. It may or may not know English, and it may
or may not know that it is being simulated by an English speaker.
The phenomenon is analogous to the fact that we may have
interpreters or compilers for one programming language written in
another, or that in logic we may describe one logical system in
another. What the basic machine is "doing" need not be identified
with what programs running on it are "doing".
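The two levels of description can be exhibited in a few lines. The
following sketch is my own illustration in Python, not anything from
Searle's paper or from any real system; the rule list standing in for
the Chinese-speaking program is hypothetical.

    # At the host level, the simulator (like the man in Searle's room)
    # only matches and copies uninterpreted strings.  Under the intended
    # interpretation, the hosted program answers questions in Chinese.
    CHINESE_PROGRAM = [            # hypothetical "program" being simulated
        ("你是谁", "我是李白。"),    # "Who are you?" -> "I am Li Po."
        ("你好", "你好。"),          # "Hello." -> "Hello."
    ]

    def simulate(program, question):
        # The host-level description: compare strings, return a string.
        # Nothing here requires knowing what language, if any, they are in.
        for pattern, reply in program:
            if pattern == question:
                return reply
        return "请再说一遍。"        # fallback: "Please say that again."

    print(simulate(CHINESE_PROGRAM, "你是谁"))

Knowing Chinese is ascribed, under that interpretation, to the hosted
program and not to the host loop, just as the man's knowing English
settles nothing about what the program he simulates knows.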